# 128k long text processing

## Mistral Small 3.2 24B Instruct 2506 GGUF
- Publisher: lmstudio-community · License: Apache-2.0
- Mistral Small 3.2 24B Instruct 2506 is a multilingual large language model that accepts text and image input and produces text output, with a 128k context length.
- Tags: Image-to-Text, Supports Multiple Languages
- Downloads: 5,588 · Likes: 1
## Qwen2.5 VL 7B Instruct GGUF
- Publisher: lmstudio-community · License: Apache-2.0
- A quantized build of Qwen2.5 VL 7B Instruct, a multimodal model that accepts image and text input and generates text output, with broad applicability across many domains.
- Tags: Image-to-Text, English
- Downloads: 11.29k · Likes: 1
## Dewey En Beta
- Publisher: infgrad · License: MIT
- Dewey is a long-context embedding model built on the ModernBERT architecture, supporting a 128k context window and performing well on long-document retrieval tasks.
- Tags: Text Embedding, Transformers, English
- Downloads: 447 · Likes: 14
## Gemma 3 4b It MAX NEO Imatrix GGUF
- Publisher: DavidAU · License: Apache-2.0
- An aggressively quantized version of Google's Gemma-3 model, enhanced with NEO Imatrix technology, supporting a 128k context length and suited to a wide range of tasks.
- Tags: Large Language Model
- Downloads: 2,558 · Likes: 7
## Llama 3.2 3B Instruct QLORA INT4 EO8
- Publisher: meta-llama
- Llama 3.2 is a multilingual large language model from Meta, offered at 1B and 3B parameter scales, supporting a variety of language tasks and outperforming many comparable open-source and closed-source models.
- Tags: Large Language Model, PyTorch, Supports Multiple Languages
- Downloads: 289 · Likes: 68
## Mistral Nemo Base 2407 Chatml
- Publisher: IntervitensInc · License: Apache-2.0
- Mistral-Nemo-Base-2407 is a 12-billion-parameter generative text pretraining model trained jointly by Mistral AI and NVIDIA, outperforming models of similar or smaller size.
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Downloads: 191 · Likes: 3